
# MLX Quantization Optimization

**Qvikhr 2.5 1.5B Instruct SMPO MLX 8bit**

License: Apache-2.0

A 1.5B-parameter Russian-English bilingual instruction-tuned language model, converted with the MLX framework and supporting 8-bit quantized inference.

Tags: Large Language Model, Transformers, Supports Multiple Languages

Organization: Vikhrmodels
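To illustrate what "8-bit quantized inference" means in practice, here is a minimal sketch of group-wise affine 8-bit quantization, the general technique behind compact model formats such as MLX's. The function names, group size, and rounding details are illustrative assumptions, not MLX's actual implementation.

```python
# Illustrative sketch of group-wise affine 8-bit quantization.
# Each group of weights is mapped to integers in [0, 255] using a
# per-group scale and zero point; dequantization inverts the mapping
# approximately. This is a simplified stand-in, not MLX's real code.

def quantize_8bit(weights, group_size=4):
    """Quantize a flat list of floats, returning per-group
    (quantized values, scale, zero_point) tuples."""
    groups = []
    for i in range(0, len(weights), group_size):
        chunk = weights[i:i + group_size]
        lo, hi = min(chunk), max(chunk)
        scale = (hi - lo) / 255 or 1.0  # avoid division by zero for flat groups
        zero_point = lo
        q = [round((w - zero_point) / scale) for w in chunk]
        groups.append((q, scale, zero_point))
    return groups

def dequantize_8bit(groups):
    """Reconstruct approximate float weights from quantized groups."""
    out = []
    for q, scale, zero_point in groups:
        out.extend(v * scale + zero_point for v in q)
    return out
```

Storing one scale and zero point per small group (rather than per tensor) keeps the quantization error bounded by the local value range, which is why group-wise schemes preserve accuracy well at 8 bits.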